13.0 - 20.0 years
30 - 45 Lacs
Pune
Hybrid
Hi, Wishes from GSN! It is a pleasure connecting with you. We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT and non-IT clients in India, and have been successfully serving our clients' needs for the last 20 years. At present, GSN is hiring a DATA ENGINEERING - Solution Architect for one of our leading MNC clients. Details below:

1. WORK LOCATION: Pune
2. Job Role: DATA ENGINEERING - Solution Architect
3. EXPERIENCE: 13+ yrs
4. CTC Range: Rs. 35 LPA to Rs. 50 LPA
5. Work Type: WFO Hybrid

****** Looking for SHORT JOINERS ******

Job Description - Who we are looking for:
Architectural Vision & Strategy: Define and articulate the technical vision, strategy, and roadmap for Big Data, data streaming, and NoSQL solutions, aligning with the overall enterprise architecture and business goals.

Required Skills:
13+ years of progressive experience in software development, data engineering, and solution architecture roles, with a strong focus on large-scale distributed systems.
Expertise in Big Data Technologies: Apache Spark: Deep expertise in Spark architecture, Spark SQL, Spark Streaming, performance tuning, and optimization techniques; experience with batch and real-time data processing paradigms. Hadoop Ecosystem: Strong understanding of HDFS, YARN, Hive, and other related Hadoop components.
Real-time Data Streaming: Apache Kafka: Expert-level knowledge of Kafka architecture, topics, partitions, producers, consumers, Kafka Streams, KSQL, and best practices for high-throughput, low-latency data pipelines.
NoSQL Databases: In-depth experience with Couchbase, MongoDB, or Cassandra, including data modeling, indexing, querying (N1QL), replication, scaling, and operational best practices.
API Design & Development: Extensive experience in designing and implementing robust, scalable, and secure APIs (RESTful, GraphQL) for data access and integration.
Programming & Code Review: Hands-on coding proficiency in at least one relevant language (Python, Scala, Java), with a preference for Python and/or Scala for data engineering tasks. Proven experience in leading and performing code reviews, ensuring code quality, performance, and adherence to architectural guidelines.
Cloud Platforms: Extensive experience designing and implementing solutions on at least one major cloud platform (AWS, Azure, GCP), leveraging their Big Data, streaming, and compute services.
Database Fundamentals: Solid understanding of relational database concepts, SQL, and data warehousing principles.
System Design & Architecture Patterns: Deep knowledge of architectural patterns (e.g., Microservices, Event-Driven Architecture, Lambda/Kappa Architecture, Data Mesh) and their application in data solutions.
DevOps & CI/CD: Familiarity with DevOps principles, CI/CD pipelines, infrastructure as code (IaC), and automated deployment strategies for data platforms.

****** Looking for SHORT JOINERS ******

If interested, don't hesitate to call NAK @ 9840035825 / 9244912300 for an IMMEDIATE response. Best, ANANTH | GSN | Google review: https://g.co/kgs/UAsF9W
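By way of illustration only, here is a minimal PySpark Structured Streaming sketch of the Kafka-fed pipeline pattern this role describes; the broker address, topic name, and event schema are assumptions, and the job would also need the spark-sql-kafka connector package on its classpath.

```python
# Hedged sketch: consuming a Kafka topic with Spark Structured Streaming.
# Broker, topic, and schema are assumptions, not the client's actual setup.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "orders")                     # assumed topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Console sink keeps the sketch self-contained; production would write to Delta/HDFS.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```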
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
As a Data Engineer specializing in supply chain applications, you will play a crucial role in the Supply Chain Analytics team at NovintiX, based in Coimbatore, India. Your primary responsibility will be to design, develop, and optimize scalable data solutions that support various aspects of logistics, procurement, and demand planning. Your key responsibilities will include building and enhancing data pipelines for inventory, shipping, and procurement data, integrating data from ERP, PLM, and third-party sources, and creating APIs to facilitate seamless data exchange. Additionally, you will be tasked with designing and maintaining enterprise-grade data lakes and warehouses while ensuring high standards of data quality, integrity, and security. Collaborating with stakeholders, you will be involved in developing reporting dashboards using tools like Power BI, Tableau, or QlikSense to support supply chain decision-making through data-driven insights. You will also work on building data models and algorithms for demand forecasting and logistics optimization, leveraging ML libraries and concepts for predictive analysis. Your role will involve cross-functional collaboration with supply chain, logistics, and IT teams, translating complex technical solutions into business language to drive operational efficiency. Implementing robust data governance frameworks and ensuring data compliance and audit readiness will be essential aspects of your job. To qualify for this position, you should have at least 7 years of experience in Data Engineering, a Bachelor's degree in Computer Science/IT or a related field, and expertise in technologies such as Python, Java, SQL, Spark SQL, Hadoop, PySpark, NoSQL, Power BI, Tableau, QlikSense, Azure Data Factory, Azure Databricks, and AWS. Strong collaboration, communication skills, and experience in fast-paced, agile environments are also desired. This is a full-time position based in Coimbatore, Tamil Nadu, requiring in-person work. If you are passionate about leveraging data to drive supply chain efficiency and are ready to take on this exciting challenge, please send your resume to shanmathi.saravanan@novintix.com before the application deadline on 13/07/2025.
Posted 2 weeks ago
8.0 - 12.0 years
12 - 18 Lacs
Noida
Work from Office
General Roles & Responsibilities:
Technical Leadership: Demonstrate leadership and the ability to guide business and technology teams in adopting best practices and standards.
Design & Development: Design, develop, and maintain a robust, scalable, and high-performance data estate.
Architecture: Architect and design robust data solutions that meet business requirements, including scalability, performance, and security.
Quality: Ensure the quality of deliverables through rigorous reviews and adherence to standards.
Agile Methodologies: Actively participate in agile processes, including planning, stand-ups, retrospectives, and backlog refinement.
Collaboration: Work closely with system architects, data engineers, data scientists, data analysts, cloud engineers, and other business stakeholders to determine the optimal, future-proof solution and architecture.
Innovation: Stay updated with the latest industry trends and technologies, and drive continuous improvement initiatives within the development team.
Documentation: Create and maintain technical documentation, including design documents and architectural user guides.

Technical Responsibilities: Optimize data pipelines for performance and efficiency. Work with Databricks clusters and configuration management tools. Use appropriate tools in cloud data lake development and deployment. Develop and implement cloud infrastructure to support current and future business needs. Provide technical expertise and ownership in the diagnosis and resolution of issues. Ensure all cloud solutions exhibit a high level of cost efficiency, performance, security, scalability, and reliability. Manage cloud data lake development and deployment on AWS Databricks. Manage and create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions in Databricks.

Required Technical Skills: Experience and proficiency with the Databricks platform: Delta Lake storage, Spark (PySpark, Spark SQL). Must be well versed in the Databricks Lakehouse and Unity Catalog concepts and their implementation in enterprise environments. Familiarity with the medallion architecture data design pattern for organizing data in a Lakehouse. Experience and proficiency with AWS data services (S3, Glue, Athena, Redshift, etc.) and Airflow scheduling. Proficiency in SQL and experience with relational databases. Proficiency in at least one programming language (e.g., Python, Java) for data processing and scripting. Experience with DevOps practices: AWS DevOps for CI/CD, Terraform/CDK for infrastructure as code. Good understanding of data principles and cloud data lake design and development, including data ingestion, data modeling, and data distribution. Jira: Proficient in using Jira for managing projects and tracking progress.

Other Skills: Strong communication and interpersonal skills. Collaborate with data stewards, data owners, and IT teams for effective implementation. Understanding of business processes and terminology, preferably logistics. Experienced with Scrum and agile methodologies.

Qualification: Bachelor's degree in information technology or a related field; equivalent experience may be considered.
Overall experience of 8-12 years in Data Engineering.

Mandatory Competencies:
- Data Science and Machine Learning: Databricks
- Data on Cloud: Azure Data Lake (ADL), AWS S3, Redshift
- Agile: Agile, SCRUM
- Data Analysis: Data Analysis, QA Analytics
- Big Data: PySpark
- ETL: AWS Glue
- Python: Python, Python Shell
- DevOps / Development Tools and Management: CI/CD
- Behavioral: Communication and collaboration
- Cloud - Azure: Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight
- Database: Database Programming - SQL
- Cloud - AWS: AWS S3, S3 Glacier, AWS EBS, TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift, AWS Lambda, AWS EventBridge, AWS Fargate
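Since the posting explicitly names the medallion (bronze/silver/gold) Lakehouse pattern, a hedged sketch of that flow with Delta tables follows; the paths, schemas, and table names are illustrative assumptions, not the employer's actual estate.

```python
# Illustrative medallion flow: bronze (raw), silver (cleansed), gold (aggregated)
# Delta tables. Paths, columns, and the bronze/silver/gold schemas are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw files as-is.
raw = spark.read.json("s3://lake/raw/shipments/")  # assumed source path
raw.write.format("delta").mode("append").saveAsTable("bronze.shipments")

# Silver: deduplicate and conform types.
silver = (
    spark.table("bronze.shipments")
    .dropDuplicates(["shipment_id"])
    .withColumn("ship_date", F.to_date("ship_date"))
    .filter(F.col("shipment_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.shipments")

# Gold: business-level aggregate for reporting.
gold = silver.groupBy("ship_date").agg(F.count("*").alias("shipments"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_shipments")
```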
Posted 2 weeks ago
4.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Big Data Lead with 7-12 years of experience, you will lead the development of data processing systems and applications, specifically in Data Warehousing (DWH). Your role will involve utilizing your strong software development skills in multiple computing languages, with a focus on distributed data processing systems and BI/DW programs. You should have a minimum of 4 years of software development experience and a proven track record in developing and testing applications, preferably on the J2EE stack. A sound understanding of best practices and concepts related to Data Warehouse applications is crucial for this role. Additionally, you should possess a strong foundation in distributed systems and computing systems, with hands-on experience in Spark and Scala, Kafka, Hadoop, HBase, Pig, and Hive. Experience with NoSQL data stores, data modeling, and data management will be beneficial for this role. Strong interpersonal, oral, and written communication skills are essential. Knowledge of Data Lake implementation as an alternative to Data Warehousing is desirable. Hands-on experience with Spark SQL and SQL proficiency are mandatory requirements, and you should have a minimum of two end-to-end implementations in either Data Warehousing or Data Lake projects. Your role as a Big Data Lead will involve collaborating with cross-functional teams and driving data-related initiatives to meet business objectives effectively.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
You are a strategic thinker passionate about driving solutions in Data Analytics. You have found the right team. As an Analytics Solutions Vice President in our Finance team, you will define, refine, and deliver our firm's goals. If you're a skilled data professional passionate about transforming raw data into actionable insights and eager to learn and implement new technologies, you've found the right team. Join us in the Finance Data & Insights Team, an agile product team focused on developing, producing, and transforming financial data and reporting across CCB. Your role will involve creating data visualizations and intelligence solutions for top leaders to achieve strategic goals. You'll identify opportunities to eliminate manual processes and use automation tools like Alteryx, Tableau, and ThoughtSpot to develop automated solutions. Additionally, you'll extract, analyze, and summarize data for ad hoc requests and contribute to modernizing our data environment to a cloud platform. Job responsibilities: - Lead Data & Analytics requirements gathering sessions with varying levels of leadership and complete detailed project planning using JIRA to record planned project execution steps. - Understand databases, ETL processes, and translate logic into requirements for the Technology team. - Develop and enhance Alteryx workflows by collecting data from disparate sources and summarizing it as defined in requirements gathering with stakeholders, following best practices to source data from authoritative sources. - Develop data visualization solutions using Tableau and/or ThoughtSpot to provide intuitive insights to key stakeholders. - Conduct thorough control testing of each component of the intelligence solution, providing evidence that all data and visualizations offer accurate insights and evidence in the control process. - Seek to understand stakeholder use cases to anticipate their requirements, questions, and objections. - Become a subject matter expert in these responsibilities and support team members in becoming more proficient. Required qualifications, capabilities, and skills: - Bachelor's degree in MIS or Computer Science, Mathematics, Engineering, Statistics, or other quantitative or financial subject areas - People management experience of at least 3 years is required - Experience with business intelligence analytic and data wrangling tools such as Alteryx, SAS, or Python - Experience with relational databases optimizing SQL to pull and summarize large datasets, report creation and ad-hoc analyses, Databricks, Cloud solutions - Experience in reporting development and testing, and ability to interpret unstructured data and draw objective inferences given known limitations of the data - Demonstrated ability to think beyond raw data and to understand the underlying business context and sense business opportunities hidden in data - Strong written and oral communication skills; ability to communicate effectively with all levels of management and partners from a variety of business functions - Experience with ThoughtSpot or similar tools empowering stakeholders to better understand their data - Highly motivated, self-directed, curious to learn new technologies Preferred qualifications, capabilities, and skills: - Experience with ThoughtSpot / Python major advantage - Experience with AI/ML or LLM added advantage but not a must-have. 
Minimum 8 years' experience developing advanced data visualizations and presentations, preferably with Tableau - Experience with Hive, Spark SQL, Impala, or other big-data query tools; AWS, Databricks, Snowflake, or other cloud data warehouse experience - Minimum of 8 years' experience working with data analytics projects, preferably related to the financial services domain
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana
On-site
You should have a minimum of 6 years of experience in the technical field and possess the following skills: Python, Spark SQL, PySpark, Apache Airflow, DBT, Snowflake, CI/CD, Git, GitHub, and AWS. Your role will involve understanding the existing code base in AWS services and SQL, and converting it to a tech stack primarily using Airflow, Iceberg, Python, and SQL. Your responsibilities will include designing and building data models to support business requirements, developing and maintaining data ingestion and processing systems, implementing data storage solutions, ensuring data consistency and accuracy through validation and cleansing techniques, and collaborating with cross-functional teams to address data-related issues. Proficiency in Python, experience with big data Spark, orchestration experience with Airflow, and AWS knowledge are essential for this role. You should also have experience in security and governance practices such as role-based access control (RBAC) and data lineage tools, as well as knowledge of database management systems like MySQL. Strong problem-solving and analytical skills, along with excellent communication and collaboration abilities, are key attributes for this position. At NucleusTeq, we foster a positive and supportive culture that encourages our associates to perform at their best every day. We value and celebrate individual uniqueness, offering flexibility for making daily choices that contribute to overall well-being. Our well-being programs and continuous efforts to enhance our culture aim to create an environment where our people can thrive, lead healthy lives, and excel in their roles.
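As the role centers on converting workloads to Airflow orchestration, here is a minimal DAG sketch (Airflow 2.x style); the task bodies, IDs, and schedule are placeholders.

```python
# Minimal Airflow 2.x DAG sketch for a daily ingest-then-transform flow.
# Task callables, IDs, and the schedule are placeholders, not a real pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    """Placeholder: e.g., land files from S3 or pull from an API."""

def transform():
    """Placeholder: e.g., trigger a Spark job or a DBT run."""

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task
```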
Posted 3 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
As a Senior Azure Data Engineer, your responsibilities will include: Building scalable data pipelines using Databricks and PySpark. Transforming raw data into usable business insights. Integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics. Deploying and maintaining machine learning models using MLlib or TensorFlow. Executing large-scale Spark jobs with performance tuning on Spark Pools. Leveraging Databricks Notebooks and managing workflows with MLflow. Qualifications: Bachelor's/Master's in Computer Science, Data Science, or equivalent. 7+ years in Data Engineering, with 3+ years in Azure Databricks. Strong hands-on skills in PySpark, Spark SQL, RDDs, Pandas, NumPy, and Delta Lake, plus the Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics. Location: Remote - Bengaluru, Hyderabad, Delhi/NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
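The posting mentions managing workflows with MLflow; a small, hedged tracking sketch follows, with a run name, parameter, and metric values that are purely illustrative.

```python
# Hedged sketch of MLflow experiment tracking as commonly used in Databricks
# notebooks; the run name, parameter, and metric values are illustrative.
import mlflow

with mlflow.start_run(run_name="demand-model-demo"):
    mlflow.log_param("max_depth", 8)
    mlflow.log_metric("rmse", 0.42)
    # A fitted Spark ML model could be persisted with:
    # mlflow.spark.log_model(model, "model")
```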
Posted 3 weeks ago
10.0 - 20.0 years
20 - 35 Lacs
Pune
Hybrid
Hi, Wishes from GSN! It is a pleasure connecting with you. We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT and non-IT clients in India, and have been successfully serving our clients' needs for the last 20 years. At present, GSN is hiring a Big Data Solution Architect for one of our leading MNC clients. Details below:

WORK LOCATION: Pune
Job Role: Big Data Solution Architect
EXPERIENCE: 10-20 yrs
CTC Range: 25 LPA - 35 LPA
Work Type: Hybrid

Required Skills & Experience:
10+ years of progressive experience in software development, data engineering, and solution architecture roles, with a strong focus on large-scale distributed systems.
Expertise in Big Data Technologies: Apache Spark: Deep expertise in Spark architecture, Spark SQL, Spark Streaming, performance tuning, and optimization techniques; experience with batch and real-time data processing paradigms. Hadoop Ecosystem: Strong understanding of HDFS, YARN, Hive, and other related Hadoop components.
Real-time Data Streaming: Apache Kafka: Expert-level knowledge of Kafka architecture, topics, partitions, producers, consumers, Kafka Streams, KSQL, and best practices for high-throughput, low-latency data pipelines.
NoSQL Databases: In-depth experience with Couchbase (or similar document/key-value NoSQL databases like MongoDB or Cassandra), including data modeling, indexing, querying (N1QL), replication, scaling, and operational best practices.
API Design & Development: Extensive experience in designing and implementing robust, scalable, and secure APIs (RESTful, GraphQL) for data access and integration.
Programming & Code Review: Hands-on coding proficiency in at least one relevant language (Python, Scala, Java), with a preference for Python and/or Scala for data engineering tasks. Proven experience in leading and performing code reviews, ensuring code quality, performance, and adherence to architectural guidelines.
Cloud Platforms: Extensive experience designing and implementing solutions on at least one major cloud platform (AWS, Azure, GCP), leveraging their Big Data, streaming, and compute services.
Database Fundamentals: Solid understanding of relational database concepts, SQL, and data warehousing principles.
System Design & Architecture Patterns: Deep knowledge of architectural patterns (e.g., Microservices, Event-Driven Architecture, Lambda/Kappa Architecture, Data Mesh) and their application in data solutions.
DevOps & CI/CD: Familiarity with DevOps principles, CI/CD pipelines, infrastructure as code (IaC), and automated deployment strategies for data platforms.

If interested, kindly APPLY for an IMMEDIATE response.

Thanks & Rgds, SHOBANA | GSN | Mob: 8939666294 (WhatsApp) | Email: Shobana@gsnhr.net | Web: www.gsnhr.net | Google Reviews: https://g.co/kgs/UAsF9W
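This posting also stresses API design for data access. Purely as an illustration, a minimal FastAPI endpoint sketch is shown below; the route shape and the in-memory dictionary standing in for a Couchbase/NoSQL lookup are assumptions.

```python
# Illustrative RESTful data-access endpoint with FastAPI. The route shape and
# the in-memory dict (a stand-in for a NoSQL backend) are assumptions.
from fastapi import FastAPI, HTTPException

app = FastAPI()

FAKE_STORE = {"42": {"order_id": "42", "status": "shipped"}}

@app.get("/orders/{order_id}")
def get_order(order_id: str) -> dict:
    doc = FAKE_STORE.get(order_id)
    if doc is None:
        raise HTTPException(status_code=404, detail="order not found")
    return doc

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
```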
Posted 3 weeks ago
10.0 - 20.0 years
25 - 40 Lacs
Chennai, Pune, Gurgaon, Hyderabad/Bangalore
Work from Office
Function: Software Engineering - Big Data / DWH / ETL. Skills: Azure Data Factory, Azure Synapse, ETL, Spark SQL, Scala. Responsibilities: Designing and implementing scalable and efficient data architectures. Creating data models and optimizing data structures for performance and usability. Implementing and managing data lakehouses and real-time analytics solutions using Microsoft Fabric. Leveraging Fabric's OneLake, Dataflows, and Synapse Data Engineering for seamless data management. Enabling end-to-end analytics and AI-powered insights. Developing and orchestrating data pipelines in Azure Data Factory. Managing ETL/ELT processes for data integration across various sources. Optimizing data workflows for performance and cost efficiency. Designing interactive dashboards and reports in Power BI. Implementing data models, DAX calculations, and performance optimizations. Ensuring data quality, security, and governance in reporting solutions. Requirements: A Data Architect with 10+ years of experience and Microsoft Fabric skills who designs and implements data solutions using Fabric, focusing on data integration, analytics, and automation, while ensuring data quality, security, and compliance. Primary Skills (Must Have): Azure Data Pipeline, Apache Spark, ETL, Azure Data Factory, Azure Synapse, Azure Functions, Spark SQL, SQL. Secondary Skills (Good to Have): Other Azure services, Python/Scala, DataStage (preferably), and Fabric.
Posted 1 month ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions. Key Responsibilities: Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight). Work with structured and unstructured data to perform data transformation, cleansing, and aggregation. Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow). Optimize PySpark jobs for performance tuning, partitioning, and caching strategies. Design and implement real-time and batch data processing solutions. Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates. Ensure data security, governance, and compliance with industry best practices. Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models. Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization. Perform unit testing and validation to ensure data integrity and reliability. Required Skills & Qualifications: 6+ years of experience in big data processing, ETL, and data engineering. Strong hands-on experience with PySpark (Apache Spark with Python). Expertise in SQL, DataFrame API, and RDD transformations. Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL). Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow). Proficiency in writing optimized queries, partitioning, and indexing for performance tuning. Experience with workflow orchestration tools like Airflow, Oozie, or Prefect. Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines. Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.). Excellent problem-solving, debugging, and performance optimization skills.
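To make the tuning bullets above concrete, here is a hedged PySpark sketch of partitioning, caching, and a broadcast join; the paths, column names, and partition count are illustrative assumptions.

```python
# Hedged sketch of common PySpark tuning moves: repartitioning, caching,
# and a broadcast join. Paths, columns, and the partition count are assumed.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.read.parquet("s3://lake/facts/")  # large fact table (assumed path)
dims = spark.read.parquet("s3://lake/dims/")    # small dimension table

facts = facts.repartition(200, "customer_id")   # spread hot keys across tasks
facts.cache()                                   # reuse across multiple actions

joined = facts.join(broadcast(dims), "customer_id")  # skip the shuffle for the small side
joined.write.mode("overwrite").partitionBy("ingest_date").parquet("s3://lake/out/")
```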
Posted 1 month ago
12.0 - 14.0 years
12 - 20 Lacs
Hyderabad, Bengaluru
Hybrid
Please note: the notice period should be 0-15 days. We are looking for a highly experienced (10+ years) and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling. AWS: Deep hands-on expertise with key AWS data services and infrastructure. Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog. Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL. Architectural: Strong data modeling and architectural design skills with a focus on practical implementation. Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools. Design & Build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog). Databricks Expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.
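Given the Delta Lake and Unity Catalog focus, a hedged upsert sketch using the delta-spark Python API follows; the staging path, target table, and key column are assumptions.

```python
# Hedged upsert sketch with the delta-spark Python API, a typical Databricks
# pattern; the staging path, target table, and key column are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
updates = spark.read.parquet("s3://lake/staging/customers/")  # assumed staging data

target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()       # update existing rows
    .whenNotMatchedInsertAll()    # insert new rows
    .execute()
)
```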
Posted 1 month ago
7.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
The Data Scientist-3 in Bangalore (or Mumbai) will be part of the 811 Data Strategy Group, which comprises data engineers, data scientists, and data analytics professionals. He/she will be associated with one of the key functional areas such as Product Strategy, Cross-Sell, Asset Risk, Fraud Risk, or Customer Experience, and help build robust and scalable solutions that are deployed for real-time or near-real-time consumption and integrated into our proprietary Customer Data Platform (CDP). This is an exciting opportunity to work on data-driven analytical solutions and have a profound influence on the growth trajectory of a fast-evolving digital product. Key Requirements of the Role: Advanced degree in an analytical field (e.g., Data Science, Computer Science, Engineering, Applied Mathematics, Statistics, Data Analysis) or substantial hands-on work experience in the space. 7-10 years of relevant experience in the space. Expertise in mining AI/ML opportunities from open-ended business problems and driving solution design/development while closely collaborating with engineering, product, and business teams. Strong understanding of advanced data mining techniques, and of curating, processing, and transforming data to produce sound datasets. Strong experience in NLP, time-series forecasting, and recommendation engines preferred. Create great data stories with expertise in robust EDA and statistical inference. Should have at least a foundational understanding of experimentation design. Strong understanding of the machine learning lifecycle: feature engineering, training, validation, scaling, deployment, scoring, monitoring, and the feedback loop. Exposure to deep learning applications and tools like TensorFlow, Theano, Torch, or Caffe preferred. Experience with analytical programming languages, tools, and libraries (Python a must) as well as shell scripting. Should be proficient in developing production-ready code as per best practices. Experience in using Scala/Java/Go-based libraries a big plus. Very proficient in SQL and other relational databases, along with PySpark or Spark SQL. Proficient in using NoSQL databases; experience in using graph databases like Neo4j a plus. Should be able to handle unstructured data with ease. Should have experience working with MLEs and be proficient (with experience) in using MLOps tools, and able to consume the capabilities of those tools with a deep understanding of the deployment lifecycle. Experience in CI/CD deployment is a big plus. Knowledge of key concepts in distributed systems, like replication, serialization, and concurrency control, is a big plus. Good understanding of programming best practices and building code artifacts for reuse. Should be comfortable with version control and collaborate comfortably in tools like git. Ability to create frameworks that can perform model RCAs using analytical and interpretability tools. Should be able to peer-review model documentation/code bases and find opportunities. Experience in end-to-end delivery of AI-driven solutions (deep learning, traditional data science projects). Strong communication, partnership, and teamwork skills. Should be able to guide and mentor teams while leading them by example, and be an integral part of creating a team culture focused on driving collaboration, technical expertise, and partnerships with other teams.
Ability to work in an extremely fast-paced environment, meet deadlines, and perform at high standards with limited supervision. A self-starter who is looking to build from the ground up and contribute to the making of a potential big name in the space. Experience in banking and financial services is a plus; however, sound logical reasoning and first-principles problem solving are even more critical.

Job role:
1. As a key partner at the table, attend key meetings with the business team to bring the data perspective to the discussions.
2. Perform comprehensive data exploration to generate inquisitive insights and scope out the problem.
3. Develop simple to advanced solutions to address the problem at hand. We believe in making swift (albeit sometimes marginal) impact to business KPIs and hence adopt an MVP approach to solution development.
4. Build reusable code analytical frameworks to address commonly occurring business questions.
5. Perform 360-degree customer profiling and opportunity analyses to guide new product strategy. This is a nascent business and hence opportunities to guide business strategy are plenty.
6. Guide team members on data science and analytics best practices to help them overcome bottlenecks and challenges.
7. The role will be approximately 60% IC and 40% leading, and the ratios can vary based on need and fit.
8. Develop Customer-360 features that will be integrated into the Customer Data Platform (CDP) to enhance the single view of our customer.
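Since the role highlights time-series forecasting, here is a small, hedged sketch using Holt-Winters exponential smoothing from statsmodels; the series is synthetic and the model choice is just one of many such a team might use.

```python
# Hedged forecasting sketch with Holt-Winters exponential smoothing from
# statsmodels; the monthly series is synthetic and purely illustrative.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

y = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118], dtype=float)

model = ExponentialSmoothing(y, trend="add", seasonal=None).fit()
print(model.forecast(3))  # point forecasts for the next three periods
```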
Posted 1 month ago
3.0 - 5.0 years
22 - 25 Lacs
Bengaluru
Work from Office
Job Description: We are looking for an energetic, self-motivated, and exceptional data engineer to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. He/she will work with a star team of architects, data scientists/AI specialists, data engineers, and integration engineers. Skills and Qualifications: 5+ years of experience in the DWH/ETL domain on the Databricks/AWS tech stack. 2+ years of experience in building data pipelines with Databricks/PySpark/SQL. Experience in writing and interpreting SQL queries, designing data models and data standards. Experience in SQL Server databases, Oracle, and/or cloud databases. Experience in data warehousing and data marts, star and snowflake models. Experience in loading data into databases from databases and files. Experience in analyzing and drawing design conclusions from data profiling results. Understanding of business processes and the relationships of systems and applications. Must be comfortable conversing with end-users. Must have the ability to manage multiple projects/clients simultaneously. Excellent analytical, verbal, and communication skills. Role and Responsibilities: Work with business stakeholders and build data solutions to address analytical and reporting requirements. Work with application developers and business analysts to implement and optimize Databricks/AWS-based implementations meeting data requirements. Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow. Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation. Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases. Conduct root cause analysis and resolve production problems and data issues. Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings. Provide support for production problems and daily batch processing. Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta tables, Parquet), and views to ensure data integrity and performance.
Posted 1 month ago
10.0 - 12.0 years
12 - 14 Lacs
Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 11

The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.

The Impact: The work you do will be used every single day; it's the essential code you write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets.

What's in it for you: Build a career with a global company. Work on code that fuels the global financial markets. Grow and improve your skills by working on enterprise-level products and new technologies.

Responsibilities: Solve problems, analyze and isolate issues. Provide technical guidance and mentoring to the team and help them adopt change as new processes are introduced. Champion best practices and serve as a subject matter authority. Develop solutions to develop/support key business needs. Engineer components and common services based on standard development models, languages, and tools. Produce system design documents and lead technical walkthroughs. Produce high-quality code. Collaborate effectively with technical and non-technical partners. As a team member, continuously improve the architecture.

Basic Qualifications: 10-12 years of experience designing/building data-intensive solutions using distributed computing. Proven experience in implementing and maintaining enterprise search solutions in large-scale environments. Experience working with business stakeholders and users, providing research direction and solution design, and writing robust, maintainable architectures and APIs. Experience developing and deploying search solutions in a public cloud such as AWS. Proficient programming skills in high-level languages: Java, Scala, Python. Solid knowledge of at least one machine learning research framework. Familiarity with containerization, scripting, cloud platforms, and CI/CD. 5+ years' experience with Python, Java, Kubernetes, and data and workflow orchestration tools. 4+ years' experience with Elasticsearch, SQL, NoSQL, Apache Spark, Flink, Databricks, and MLflow. Prior experience with operationalizing data-driven pipelines for large-scale batch and stream processing analytics solutions. Good to have: experience contributing to GitHub and open-source initiatives or research projects, and/or participation in Kaggle competitions. Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines. Strong communication and documentation skills for both technical and non-technical audiences.

Preferred Qualifications: Search Technologies: query and indexing content for Apache Solr, Elasticsearch, etc. Proficiency in search query languages (e.g., Lucene Query Syntax) and experience with data indexing and retrieval. Experience with machine learning models and NLP techniques for search relevance and ranking. Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec). Experience with relevance tuning using A/B testing frameworks. Big Data Technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow. Data Science Search Technologies: personalization and recommendation models, Learning to Rank (LTR). Preferred Languages: Python, Java. Database Technologies: MS SQL Server platform, stored procedure programming experience using Transact-SQL. Ability to lead, train, and mentor.
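Given the enterprise-search emphasis, a minimal hedged query sketch with the official Elasticsearch Python client (v8 API) follows; the host, index, and field names are assumptions.

```python
# Minimal search sketch with the official Elasticsearch Python client (v8 API);
# host, index name, and field are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="market-news",
    query={"match": {"headline": "commodities"}},
    size=5,
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["headline"])
```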
About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit .

What's In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Pune, Hinjewadi
Work from Office
Job Summary: Synechron is seeking an experienced and technically proficient Senior PySpark Data Engineer to join our data engineering team. In this role, you will be responsible for developing, optimizing, and maintaining large-scale data processing solutions using PySpark. Your expertise will support our organization's efforts to leverage big data for actionable insights, enabling data-driven decision-making and strategic initiatives.

Software Requirements - Required Skills: Proficiency in PySpark; familiarity with Hadoop ecosystem components (e.g., HDFS, Hive, Spark SQL); experience with Linux/Unix operating systems; data processing tools like Apache Kafka or similar streaming platforms. Preferred Skills: Experience with cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight); knowledge of Python (beyond PySpark), Java, or Scala relevant to big data applications; familiarity with data orchestration tools (e.g., Apache Airflow, Luigi).

Overall Responsibilities: Design, develop, and optimize scalable data processing pipelines using PySpark. Collaborate with data engineers, data scientists, and business analysts to understand data requirements and deliver solutions. Implement data transformations, aggregations, and extraction processes to support analytics and reporting. Manage large datasets in distributed storage systems, ensuring data integrity, security, and performance. Troubleshoot and resolve performance issues within big data workflows. Document data processes, architectures, and best practices to promote consistency and knowledge sharing. Support data migration and integration efforts across varied platforms.

Strategic Objectives: Enable efficient and reliable data processing to meet organizational analytics and reporting needs. Maintain high standards of data security, compliance, and operational durability. Drive continuous improvement in data workflows and infrastructure.

Performance Outcomes & Expectations: Efficient processing of large-scale data workloads with minimum downtime. Clear, maintainable, and well-documented code. Active participation in team reviews, knowledge transfer, and innovation initiatives.

Technical Skills (By Category): Programming Languages - Required: PySpark (essential); Python (needed for scripting and automation). Preferred: Java, Scala. Databases/Data Management - Required: Experience with distributed data storage (HDFS, S3, or similar) and data warehousing solutions (Hive, Snowflake). Preferred: Experience with NoSQL databases (Cassandra, HBase). Cloud Technologies - Required: Familiarity with deploying and managing big data solutions on cloud platforms such as AWS (EMR), Azure, or GCP. Preferred: Cloud certifications. Frameworks and Libraries - Required: Spark SQL, Spark MLlib (basic familiarity). Preferred: Integration with streaming platforms (e.g., Kafka), data validation tools. Development Tools and Methodologies - Required: Version control systems (e.g., Git), Agile/Scrum methodologies. Preferred: CI/CD pipelines, containerization (Docker, Kubernetes). Security Protocols - Optional: Basic understanding of data security practices and compliance standards relevant to big data management.

Experience Requirements: 7+ years of experience in big data environments with hands-on PySpark development. Proven ability to design and implement large-scale data pipelines. Experience working with cloud and on-premises big data architectures. Preference for candidates with domain-specific experience in finance, banking, or related sectors.
Candidates with substantial related experience and strong technical skills in big data, even from different domains, are encouraged to apply.

Day-to-Day Activities: Develop, test, and deploy PySpark data processing jobs to meet project specifications. Collaborate in multi-disciplinary teams during sprint planning, stand-ups, and code reviews. Optimize existing data pipelines for performance and scalability. Monitor data workflows, troubleshoot issues, and implement fixes. Engage with stakeholders to gather new data requirements, ensuring solutions are aligned with business needs. Contribute to documentation, standards, and best practices for data engineering processes. Support the onboarding of new data sources, including integration and validation.

Decision-Making Authority & Responsibilities: Identify performance bottlenecks and propose effective solutions. Decide on appropriate data processing approaches based on project requirements. Escalate issues that impact project timelines or data integrity.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field; equivalent experience considered. Relevant certifications are preferred: Cloudera, Databricks, AWS Certified Data Analytics, or similar. Commitment to ongoing professional development in data engineering and big data technologies. Demonstrated ability to adapt to evolving data tools and frameworks.

Professional Competencies: Strong analytical and problem-solving skills, with the ability to model complex data workflows. Excellent communication skills to articulate technical solutions to non-technical stakeholders. Effective teamwork and collaboration in a multidisciplinary environment. Adaptability to new technologies and emerging trends in big data. Ability to prioritize tasks effectively and manage time in fast-paced projects. An innovation mindset, actively seeking ways to improve data infrastructure and processes.
Posted 1 month ago
5.0 - 10.0 years
10 - 12 Lacs
Chennai
Work from Office
We are seeking a Databricks developer with deep SQL expertise to support the development of scalable data pipelines and analytics workflows. The developer will work closely with data engineers and BI analysts to prepare clean, query-optimized datasets for reporting and modeling.
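As one illustration of preparing "query-optimized datasets" on Databricks, a hedged sketch follows using a Spark SQL CTAS plus Delta's OPTIMIZE/ZORDER commands; the table and column names are assumptions.

```python
# Hedged sketch: build a reporting-ready gold table with Spark SQL, then
# co-locate a hot filter column with Delta's OPTIMIZE/ZORDER (Databricks SQL).
# Table and column names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE OR REPLACE TABLE gold.sales_daily
    USING DELTA AS
    SELECT sale_date, region, SUM(amount) AS revenue
    FROM silver.sales
    GROUP BY sale_date, region
""")
spark.sql("OPTIMIZE gold.sales_daily ZORDER BY (region)")
```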
Posted 1 month ago
5.0 - 7.0 years
15 - 25 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About the Role: We are seeking a skilled and experienced Data Engineer to join our remote team. The ideal candidate will have 5-7 years of professional experience working with Python, PySpark, SQL, and Spark SQL, and will play a key role in building scalable data pipelines, optimizing data workflows, and supporting data-driven decision-making across the organization. Key Responsibilities: Design, build, and maintain scalable and efficient data pipelines using PySpark and SQL. Develop and optimize Spark jobs for large-scale data processing. Collaborate with data scientists, analysts, and other engineers to ensure data quality and accessibility. Implement data integration from multiple sources into a unified data warehouse or lake. Monitor and troubleshoot data pipelines and ETL jobs for performance and reliability. Ensure best practices in data governance, security, and compliance. Create and maintain technical documentation related to data pipelines and infrastructure. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad, Remote
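One hedged way to make "monitor and troubleshoot data pipelines for reliability" concrete is a lightweight quality gate inside a PySpark job that fails fast on bad keys; the paths and thresholds below are assumptions.

```python
# Hedged sketch of a fail-fast data-quality gate inside a PySpark pipeline;
# the input path, key column, and 1% null threshold are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("s3://lake/landing/orders/")  # assumed input

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

if total == 0 or null_keys / total > 0.01 or dupes > 0:
    raise ValueError(f"quality gate failed: rows={total} nulls={null_keys} dupes={dupes}")

df.write.mode("overwrite").parquet("s3://lake/validated/orders/")
```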
Posted 1 month ago
8.0 - 10.0 years
8 - 12 Lacs
Pune
Work from Office
Role Purpose: The role incumbent is focused on implementation of roadmaps for business process analysis, data analysis, diagnosis of gaps, business requirements and functional definitions, best-practices application, and meeting facilitation, and contributes to project planning. Consultants are expected to contribute to solution building for the client and practice. The role holder can handle higher scale and complexity compared to a Consultant profile and is more proactive in client interactions.

Do: Assumes responsibilities as the main client contact, leading the engagement with 10-20% support from Consulting and Client Partners. Develops, assesses, and validates a client's business strategy, including industry and competitive positioning and strategic direction. Develops solutions and services to suit the client's business strategy. Estimates scope and liability for delivery of the end product/solution. Seeks opportunities to develop revenue in existing and new areas. Leads an engagement and oversees others' contributions at the customer end, such that customer expectations are met or exceeded. Drives proposal creation and presales activities for the engagement and new accounts. Contributes towards the development of practice policies, procedures, frameworks, etc. Guides less experienced team members in delivering solutions. Leads efforts towards building go-to-market / off-the-shelf / point solutions and process methodologies for reuse. Creates reusable IP from managed projects.

Business System Analyst skills required: data warehousing, data analysis, ETL, SQL, Spark SQL, data mapping, Azure, Databricks (PySpark and Databricks). Experience in AML/banking/capital markets. Experience with Agile methodology: plan, develop, test, and deploy. Experience with SQL Server management tools and writing queries, with large SQL data marts and relational databases. Profile data and prepare source-to-target mappings; map source data to the target tables for the multi-hop architecture. Work closely with data modelers, data engineers, architects, and product owners to identify any discrepancies and get them resolved. Strong knowledge of data extraction, design, load, and reporting solutions; strong ability to read SQL Server SSIS packages, SQL stored procedures/functions, and back-trace views. Work in a team of data designers/developers to translate user and/or system requirements into functional technical specifications. Work with business stakeholders and other SMEs to assess current capabilities, understand high-level business requirements, and apply technical background/understanding in the development of System Requirements Specification (SRS) documents. Collaborate closely with Application Owners, Application Managers, and Solution Designers as the business/functional counterpart in solution identification and maintenance. Support testing teams in translating requirements and use cases into test conditions and expected results for product, performance, user acceptance, and operational acceptance testing; participate in the testing of developed systems/solutions. Act as a technical resource for business partners to ensure deliverables meet business and end-user requirements. Identify the source data/source tables per the project-specific use case and get them ingested into the source raw zone. Raise data access control requests and make source data available in the analytical zone for data profiling and data analysis. Understand the existing code and Databricks workflows and leverage them for our use case.
Mandatory Skills: Institutional Compliance. Experience: 8-10 Years.
Posted 1 month ago
5.0 - 10.0 years
10 - 15 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Designation: Azure Data Engineer. Experience: 5+ years. Location: Chennai, Bangalore, Pune, Mumbai. Notice Period: immediate joiners / serving notice period. Shift Timing: 3:30 PM IST to 12:30 AM IST. Job Description: Must have Azure Databricks, Azure Data Factory, and Spark SQL with analytical knowledge. 6-7 years of development experience in data engineering skills, with strong experience in Spark. Understand complex data systems by working closely with engineering and product teams. Develop scalable and maintainable applications to extract, transform, and load data in various formats to SQL Server, Hadoop Data Lake, or other data storage locations. Sincerely, Sonia, HR Recruiter, Talent Sketchers
Posted 2 months ago
5.0 - 10.0 years
15 - 22 Lacs
New Delhi, Chennai, Bengaluru
Work from Office
Seeking an experienced Data Engineer who can play a crucial role in the company's fintech data lake project. Technical/Functional Skills: Must have 5+ years of experience working in data warehousing systems Strong experience in Oracle Fusion ecosystem, with strong data-extracting experience using Oracle BICC/BIP. Must have good functional understanding of Fusion data structures. Must have strong and proven data engineering experience in big data Databricks environment Must have hands-on experience building data ingestion pipelines from Oracle Fusion Cloud to a Databricks environment Strong data transformation/ETL skills using Spark SQL, Pyspark, Unity Catalog working in Databricks Medallion architecture Capable of independently delivering work items and leading data discussions with Tech Leads & Business Analysts Nice to have: Experience with Fivetran or any equivalent data extraction tools is nice to have. Experience in supporting Splash report development activities is a plus. Prefer experience with Git, CI/CD tools, and code management processes The candidate is expected to: Follow engineering best practices, coding standards, and deployment processes. Troubleshoot any performance, system or data related issues, and work to ensure data consistency and integrity. Effectively communicate with users at all levels of the organization, both in written and verbal presentations. Effectively communicate with other data engineers, help other team members with design and implementation activities. Location: Delhi NCR,Bangalore,Chennai,Pune,Kolkata,Ahmedabad,Mumbai,Hyderabad
Posted 2 months ago
4.0 - 7.0 years
6 - 9 Lacs
Bengaluru
Work from Office
What this job involves: JLL, an international real estate management company, is seeking a Data Engineer to join our JLL Technologies team. We are seeking self-starters to work in a diverse and fast-paced environment on our Enterprise Data team. We are looking for a candidate who is responsible for designing and developing data solutions that are strategic for the business, using the latest technologies: Azure Databricks, Python, PySpark, Spark SQL, Azure Functions, Delta Lake, and Azure DevOps CI/CD. Responsibilities: Design, architect, and develop solutions leveraging cloud big data technology to ingest, process, and analyze large, disparate data sets to exceed business requirements. Design and develop data management and data persistence solutions for application use cases leveraging relational and non-relational databases, enhancing our data processing capabilities. Develop POCs to influence platform architects, product managers, and software engineers to validate solution proposals and migrations. Develop data lake solutions to store structured and unstructured data from internal and external sources and provide technical guidance to help migrate colleagues to a modern technology platform. Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering organization. Develop systems that ingest, cleanse, and normalize diverse datasets, develop data pipelines from various internal and external sources, and build structure for previously unstructured data. Using PySpark and Spark SQL, extract, manipulate, and transform data from various sources, such as databases, data lakes, APIs, and files, to prepare it for analysis and modeling. Build and optimize ETL workflows using Azure Databricks and PySpark, including developing efficient data processing pipelines, data validation, error handling, and performance tuning. Perform unit testing, system integration testing, and regression testing, and assist with user acceptance testing. Articulate business requirements in a technical solution that can be designed and engineered. Consult with the business to develop documentation and communication materials to ensure accurate usage and interpretation of JLL data. Implement data security best practices, including data encryption, access controls, and compliance with data protection regulations; ensure data privacy, confidentiality, and integrity throughout the data engineering processes. Perform data analysis required to troubleshoot data-related issues and assist in the resolution of data issues. Experience & Education: Minimum of 4 years of experience as a data developer using Python, PySpark, Spark SQL, ETL concepts, and SQL Server. Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science. Experience with the Azure cloud platform, Databricks, and Azure Storage. Effective written and verbal communication skills, including technical writing. Excellent technical, analytical, and organizational skills. Technical Skills & Competencies: Experience handling unstructured and semi-structured data, working in a data lake environment, leveraging data streaming, and developing data pipelines driven by events/queues. Hands-on experience and knowledge of real-time/near-real-time processing, and ready to code. Hands-on experience in PySpark, Databricks, and Spark SQL.
Knowledge of JSON, Parquet, and other file formats, and the ability to work effectively with them. Knowledge of NoSQL databases like HBase, MongoDB, Cosmos DB, etc. Preferred: cloud experience on Azure or AWS; Python-Spark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc. A team player: a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously within a fast-paced environment, working with cross-functional teams.
Posted 2 months ago
5.0 - 10.0 years
14 - 19 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Role & responsibilities: Urgent hiring for one of the reputed MNCs. Experience: 5+ years. Location: Pan India. Immediate joiners only. Skills: Snowflake developer, PySpark, Python, API, CI/CD, cloud services, Azure, Azure DevOps. Please share profiles for Snowflake developers with strong PySpark experience. Job Description: Strong hands-on experience in Snowflake development, including Streams, Tasks, and Time Travel. Deep understanding of Snowpark for Python and its application to data engineering workflows. Proficient in PySpark, Spark SQL, and distributed data processing. Experience with API development. Proficiency in cloud services (preferably Azure, but AWS/GCP also acceptable). Solid understanding of CI/CD practices and tools like Azure DevOps, GitHub Actions, GitLab, or Jenkins for Snowflake. Knowledge of Delta Lake, Data Lakehouse principles, and schema evolution is a plus.
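Because the JD leads with Snowpark for Python, a minimal hedged session-and-transform sketch follows; the connection parameters and table names are placeholders.

```python
# Hedged Snowpark-for-Python sketch: open a session and run a pushdown
# aggregation. Connection parameters and table names are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark import functions as F

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<wh>", "database": "<db>", "schema": "<schema>",
}).create()

orders = session.table("RAW.ORDERS")
daily = orders.group_by("ORDER_DATE").agg(F.sum("AMOUNT").alias("REVENUE"))
daily.write.save_as_table("ANALYTICS.DAILY_REVENUE", mode="overwrite")
```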
Posted 2 months ago
5 - 10 years
15 - 20 Lacs
Bengaluru
Work from Office
Role & responsibilities: Urgent hiring for one of the reputed MNCs. Data Analyst. Experience: 5-10 years. Only immediate joiners. Location: Bangalore. JD - Data Analyst, mandatory skills:
1. SQL: Proficient in database object creation, including tables, views, indexes, etc. Strong expertise in SQL queries, stored procedures, functions, etc. Experienced in performance tuning and optimization techniques.
2. Power BI: Proficiency in Power BI development, including report and dashboard creation. Design, develop, and maintain complex Power BI data models, ensuring data integrity and consistency. Comprehensive understanding of data modeling and data visualization concepts. Identify and resolve performance bottlenecks in Power BI reports and data models. Experience with Power Query and DAX.
3. Problem-Solving Skills: Strong analytical and problem-solving skills to identify and resolve data-related issues.
4. Python: Strong proficiency in Python programming.
5. PySpark: Extensive experience with PySpark, including DataFrames and Spark SQL.
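To ground the SQL-plus-PySpark skill set, here is a small sketch pairing the DataFrame API with a window function; the data and ranking logic are illustrative.

```python
# Small sketch pairing the PySpark DataFrame API with a window function,
# matching the SQL + PySpark skills above; data and ranking logic are illustrative.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
sales = spark.createDataFrame(
    [("north", "2024-01-01", 100.0), ("north", "2024-01-02", 150.0),
     ("south", "2024-01-01", 90.0)],
    ["region", "day", "amount"],
)

w = Window.partitionBy("region").orderBy(F.col("amount").desc())
ranked = sales.withColumn("rank", F.row_number().over(w))
ranked.filter("rank = 1").show()  # top-selling day per region
```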
Posted 2 months ago
6 - 8 years
8 - 10 Lacs
Hyderabad
Work from Office
Responsibilities: Solve problems, analyze and isolate issues. Provide technical guidance and mentoring to the team and help them adopt change as new processes are introduced. Champion best practices and serve as a subject matter authority. Develop solutions to develop/support key business needs. Engineer components and common services based on standard development models, languages, and tools. Produce system design documents and lead technical walkthroughs. Produce high-quality code. Collaborate effectively with technical and non-technical partners. As a team member, continuously improve the architecture. Basic Qualifications: 6-8 years of experience in application development using Java or .NET technologies. Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent. Knowledge of object-oriented design, the .NET framework, and design patterns. Command of essential technologies: Java and/or C#, ASP.NET. Experience with developing solutions involving relational database technologies: SQL, stored procedures. Proficient with software development lifecycle (SDLC) methodologies like Agile and Test-Driven Development. Good communication and collaboration skills. Preferred Qualifications: Search Technologies: query and indexing content for Apache Solr, Elasticsearch. Big Data Technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow. Data Science Search Technologies: personalization and recommendation models, Learning to Rank (LTR). Preferred Languages: Python. Database Technologies: MS SQL Server platform, stored procedure programming experience using Transact-SQL. Ability to lead, train, and mentor.
Posted 2 months ago