5.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Seeking a skilled Data Engineer to work on cloud-based data pipelines and analytics platforms. The ideal candidate will have hands-on experience with PySpark and AWS, proficiency in designing data lakes, and familiarity with modern data orchestration tools.
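Purely as an illustration (not part of the posting), here is a minimal sketch of the kind of PySpark data-lake pipeline this role describes: raw CSV from S3 landed as partitioned Parquet. The bucket names, paths, and columns are hypothetical, and S3 credentials/connector configuration are assumed to be in place.

```python
# Minimal PySpark sketch: raw CSV from S3 -> partitioned Parquet in a data lake.
# Bucket names, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-data-lake-ingest").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("s3a://example-raw-bucket/orders/"))          # assumed source path

curated = (raw
           .withColumn("order_date", F.to_date("order_ts"))  # derive partition key
           .dropDuplicates(["order_id"]))                    # basic data-quality step

(curated.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3a://example-curated-bucket/orders/"))        # curated data-lake zone
```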
Posted 1 month ago
8.0 - 13.0 years
10 - 15 Lacs
Gurugram
Work from Office
Required skills: Strong working knowledge of modern programming languages, ETL/data integration tools (preferably SnapLogic), and cloud concepts; SSL/TLS, SQL, REST, JDBC, JavaScript, JSON. Strong hands-on experience in SnapLogic design and development, with good working experience using various snaps for JDBC, SAP, Files, REST, SOAP, etc. Should be able to deliver the project by leading a team of 6-8 members, and should have experience in integration projects with heterogeneous landscapes. Good to have: the ability to build complex mappings with JSON path expressions, flat files, and cloud, plus Python scripting; experience in Groundplex and Cloudplex integrations. Experience in one or more RDBMS (Oracle, DB2, SQL Server, PostgreSQL, Redshift). Real-time experience working with OLAP and OLTP database models (dimensional models).
Posted 1 month ago
0.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Urgent Opening for ETL Testing - Software - Bangalore Posted On 17th Jun 2016 12:12 PM Location Bangalore Role / Position ETL Testing Experience (required) 4 plus years Description Our client, a startup founded in 2009, develops products and solutions in Customer Success Management (CSM). They have received $54 million in venture funding since 2012 and are expanding their 130+ member team (Hyderabad + US). Position: ETL Testing. Location: Bangalore. Experience: 4 plus years (experience in SQL queries). Job description: Reading and writing data to S3 buckets/MongoDB/Redshift; data migration from one DB to another; checking DB performance; very good at writing SQL queries; good understanding of SQL best practices; preparing large data sets for performance testing; security aspects in DB (add-on); understanding of different data sources - CSV, JSON, XML. If interested, please share your updated profile along with CTC details. Send Resumes to kishore.expertiz@gmail.com
Posted 1 month ago
8.0 - 10.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Urgent Opening for Solution Architect - Data Warehouse - Bangalore Posted On 04th Jul 2019 12:25 PM Location Bangalore Role / Position Solution Architect - Data Warehouse Experience (required) 8 plus years Description 8-10 years of consulting or IT experience supporting enterprise data warehouse and business intelligence environments, including experience with data warehouse architecture and design, ETL design/development, and analytics. Responsible for defining the data strategy and for ensuring that programs and projects align to that strategy. Provides thought leadership in the following areas: data access, data integration, data visualization, data modeling, data quality, and metadata management; analytics, data discovery, statistical methods, database design, and implementation. Expertise in database appliances, RDBMS, Teradata, Netezza. Hands-on experience with data architecting, data mining, large-scale data modeling, and business requirements gathering/analysis. Experience in ETL and data migration tools. Direct experience in implementing enterprise data management processes, procedures, and decision support. Strong understanding of relational data structures, theories, principles, and practices. Strong familiarity with metadata management and associated processes. Hands-on knowledge of enterprise repository tools, data modeling tools, data mapping tools, and data profiling tools. Demonstrated expertise with repository creation and data and information system life cycle methodologies. Experience with business requirements analysis, entity relationship planning, database design, reporting structures, and so on. Ability to manage data and metadata migration. Experience with data processing flowcharting techniques. Hands-on experience (5 years) in big data technologies - Hadoop, MapReduce, MongoDB - and integration with legacy environments would be preferred. Experience with Spark using Scala or Python is a big plus. Experience in cloud technologies (primarily AWS, Azure) and integration with existing on-premises data warehouse technologies. Good knowledge of S3, Redshift, Blob Storage, Presto DB, etc. Attitude to learn and adopt emerging technologies. Send Resumes to girish.expertiz@gmail.com
Posted 1 month ago
15.0 - 20.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing; create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark. Good-to-have skills: AWS Glue.
Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and optimize data workflows, ensuring that the data infrastructure supports the organization's analytical needs effectively.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge in data engineering. Continuously evaluate and improve data processing workflows to enhance efficiency and performance.
Professional & Technical Skills: Must-have: proficiency in Apache Spark. Good-to-have: experience with AWS Glue. Strong understanding of data pipeline architecture and design. Experience with ETL processes and data integration techniques. Familiarity with data quality frameworks and best practices.
Additional Information: The candidate should have a minimum of 5 years of experience in Apache Spark. This position is based at our Hyderabad office. 15 years of full-time education is required.
Posted 1 month ago
15.0 - 25.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Project Role: IoT Architect
Project Role Description: Design end-to-end IoT platform architecture solutions, including data ingestion, data processing, and analytics across different vendor platforms for highly interconnected device workloads at scale.
Must-have skills: Data Architecture Principles. Good-to-have skills: NA.
Minimum 15 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: We are seeking a highly skilled and experienced Industrial Data Architect with a proven track record of providing functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. Well versed in OT data quality, data modelling, data governance, data contextualization, database design, and data warehousing.
Roles & Responsibilities:
1. The Industrial Data Architect will be responsible for developing and overseeing industrial data architecture strategies to support advanced data analytics, business intelligence, and machine learning initiatives. This role involves collaborating with various teams to design and implement efficient, scalable, and secure data solutions for industrial operations.
2. Focused on designing, building, and managing the data architecture of industrial systems.
3. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
4. Own the offerings and assets on key components of the data supply chain: data governance, curation, data quality and master data management, data integration, data replication, data virtualization.
5. Create scalable and secure data structures, integrate with existing systems, and ensure efficient data flow.
Professional & Technical Skills:
1. Must-have skills: Domain knowledge of manufacturing IT/OT in one or more of the following verticals: Automotive, Discrete Manufacturing, Consumer Packaged Goods, Life Science.
2. Data Modeling and Architecture: Proficiency in data modeling techniques (conceptual, logical, and physical models).
3. Knowledge of database design principles and normalization.
4. Experience with data architecture frameworks and methodologies (e.g., TOGAF).
5. Relational Databases: Expertise in SQL databases such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server.
6. NoSQL Databases: Experience with at least one NoSQL database such as MongoDB, Cassandra, or Couchbase for handling unstructured data.
7. Graph Databases: Proficiency with at least one graph database such as Neo4j, Amazon Neptune, or ArangoDB. Understanding of graph data models, including property graphs and RDF (Resource Description Framework).
8. Query Languages: Experience with at least one query language such as Cypher (Neo4j), SPARQL (RDF), or Gremlin (Apache TinkerPop). Familiarity with ontologies, RDF Schema, and OWL (Web Ontology Language). Exposure to semantic web technologies and standards.
9. Data Integration and ETL (Extract, Transform, Load): Proficiency in ETL tools and processes (e.g., Talend, Informatica, Apache NiFi).
10. Experience with data integration tools and techniques to consolidate data from various sources.
11. IoT and Industrial Data Systems: Familiarity with Industrial Internet of Things (IIoT) platforms and protocols (e.g., MQTT, OPC UA).
12. Experience with IoT data platforms like AWS IoT, Azure IoT Hub, and Google Cloud IoT Core.
13. Experience working with one or more streaming data platforms such as Apache Kafka, Amazon Kinesis, or Apache Flink.
14. Ability to design and implement real-time data pipelines; familiarity with processing frameworks such as Apache Storm, Spark Streaming, or Google Cloud Dataflow (an illustrative sketch follows this list).
15. Understanding of event-driven design patterns and practices. Experience with message brokers like RabbitMQ or ActiveMQ.
16. Exposure to edge computing platforms like AWS IoT Greengrass or Azure IoT Edge.
17. AI/ML, GenAI: Experience working on data readiness for feeding into AI/ML/GenAI applications.
18. Exposure to machine learning frameworks such as TensorFlow, PyTorch, or Keras.
19. Cloud Platforms: Experience with cloud data services from at least one provider, e.g., AWS (Amazon Redshift, AWS Glue), Microsoft Azure (Azure SQL Database, Azure Data Factory), or Google Cloud Platform (BigQuery, Dataflow).
20. Data Warehousing and BI Tools: Expertise in data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery).
21. Proficiency with Business Intelligence (BI) tools such as Tableau, Power BI, and QlikView.
22. Data Governance and Security: Understanding of data governance principles, data quality management, and metadata management.
23. Knowledge of data security best practices, compliance standards (e.g., GDPR, HIPAA), and data masking techniques.
24. Big Data Technologies: Experience with big data platforms and tools such as Hadoop, Spark, and Apache Kafka.
25. Understanding of distributed computing and data processing frameworks.
Additional Information:
1. A minimum of 15-18 years of progressive information technology experience is required.
2. This position is based at the Bengaluru location.
3. 15 years of full-time education is required.
4. AWS Certified Data Engineer - Associate / Microsoft Certified: Azure Data Engineer Associate / Google Cloud Certified Professional Data Engineer certification is mandatory.
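As a purely illustrative aside (not part of the posting), a minimal Spark Structured Streaming sketch of the real-time pipeline pattern named above: consuming a Kafka topic of device telemetry and counting events per minute. The broker address and topic are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
# Minimal Spark Structured Streaming sketch: Kafka topic -> per-minute counts.
# Broker, topic, and field names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iot-stream-sketch").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
          .option("subscribe", "device-telemetry")           # assumed topic
          .load())

counts = (events
          .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))        # tumbling window
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")      # console sink for illustration only
         .start())
query.awaitTermination()
```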
Posted 1 month ago
3.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing; create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark. Good-to-have skills: AWS Glue.
Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives. You will also monitor and optimize existing data processes to enhance performance and reliability, making data accessible and actionable for stakeholders.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation and contribution in team discussions is required. Contribute to providing solutions to work-related problems. Collaborate with data architects and analysts to design data models that meet business needs. Develop and maintain documentation for data processes and workflows to ensure clarity and compliance.
Professional & Technical Skills: Must-have: proficiency in Apache Spark. Good-to-have: experience with AWS Glue. Strong understanding of data processing frameworks and methodologies. Experience in building and optimizing data pipelines for performance and scalability. Familiarity with data warehousing concepts and best practices.
Additional Information: The candidate should have a minimum of 3 years of experience in Apache Spark. This position is based at our Bengaluru office. 15 years of full-time education is required.
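For illustration only (not from the posting), a minimal AWS Glue job sketch of the Spark-plus-Glue combination this role names: read a catalog table, apply a simple transform, and write Parquet to S3. It assumes the Glue runtime (which provides the awsglue libraries); the database, table, and bucket names are hypothetical.

```python
# Minimal AWS Glue job sketch (runs inside the Glue runtime).
# Database, table, and bucket names are hypothetical placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Read from the Glue Data Catalog, then convert to a Spark DataFrame.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="example_table")
df = dyf.toDF().dropna()  # trivial data-quality step for illustration

# Write curated output back to S3 as Parquet.
df.write.mode("overwrite").parquet("s3://example-bucket/curated/")
```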
Posted 1 month ago
15.0 - 20.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Project Role: IoT Architect
Project Role Description: Design end-to-end IoT platform architecture solutions, including data ingestion, data processing, and analytics across different vendor platforms for highly interconnected device workloads at scale.
Must-have skills: Data Architecture Principles. Good-to-have skills: NA.
Minimum 12 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: We are seeking a highly skilled and experienced Industrial Data Architect with a proven track record of providing functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. Well versed in OT data quality, data modelling, data governance, data contextualization, database design, and data warehousing.
Roles & Responsibilities:
1. The Industrial Data Architect will be responsible for developing and overseeing industrial data architecture strategies to support advanced data analytics, business intelligence, and machine learning initiatives. This role involves collaborating with various teams to design and implement efficient, scalable, and secure data solutions for industrial operations.
2. Focused on designing, building, and managing the data architecture of industrial systems.
3. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
4. Own the offerings and assets on key components of the data supply chain: data governance, curation, data quality and master data management, data integration, data replication, data virtualization.
5. Create scalable and secure data structures, integrate with existing systems, and ensure efficient data flow.
Professional & Technical Skills:
1. Must-have skills: Domain knowledge of manufacturing IT/OT in one or more of the following verticals: Automotive, Discrete Manufacturing, Consumer Packaged Goods, Life Science.
2. Data Modeling and Architecture: Proficiency in data modeling techniques (conceptual, logical, and physical models).
3. Knowledge of database design principles and normalization.
4. Experience with data architecture frameworks and methodologies (e.g., TOGAF).
5. Relational Databases: Expertise in SQL databases such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server.
6. NoSQL Databases: Experience with at least one NoSQL database such as MongoDB, Cassandra, or Couchbase for handling unstructured data.
7. Graph Databases: Proficiency with at least one graph database such as Neo4j, Amazon Neptune, or ArangoDB. Understanding of graph data models, including property graphs and RDF (Resource Description Framework).
8. Query Languages: Experience with at least one query language such as Cypher (Neo4j), SPARQL (RDF), or Gremlin (Apache TinkerPop). Familiarity with ontologies, RDF Schema, and OWL (Web Ontology Language). Exposure to semantic web technologies and standards.
9. Data Integration and ETL (Extract, Transform, Load): Proficiency in ETL tools and processes (e.g., Talend, Informatica, Apache NiFi).
10. Experience with data integration tools and techniques to consolidate data from various sources.
11. IoT and Industrial Data Systems: Familiarity with Industrial Internet of Things (IIoT) platforms and protocols (e.g., MQTT, OPC UA).
12. Experience with IoT data platforms like AWS IoT, Azure IoT Hub, and Google Cloud IoT Core.
13. Experience working with one or more streaming data platforms such as Apache Kafka, Amazon Kinesis, or Apache Flink.
14. Ability to design and implement real-time data pipelines; familiarity with processing frameworks such as Apache Storm, Spark Streaming, or Google Cloud Dataflow.
15. Understanding of event-driven design patterns and practices. Experience with message brokers like RabbitMQ or ActiveMQ.
16. Exposure to edge computing platforms like AWS IoT Greengrass or Azure IoT Edge.
17. AI/ML, GenAI: Experience working on data readiness for feeding into AI/ML/GenAI applications.
18. Exposure to machine learning frameworks such as TensorFlow, PyTorch, or Keras.
19. Cloud Platforms: Experience with cloud data services from at least one provider, e.g., AWS (Amazon Redshift, AWS Glue), Microsoft Azure (Azure SQL Database, Azure Data Factory), or Google Cloud Platform (BigQuery, Dataflow).
20. Data Warehousing and BI Tools: Expertise in data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery).
21. Proficiency with Business Intelligence (BI) tools such as Tableau, Power BI, and QlikView.
22. Data Governance and Security: Understanding of data governance principles, data quality management, and metadata management.
23. Knowledge of data security best practices, compliance standards (e.g., GDPR, HIPAA), and data masking techniques.
24. Big Data Technologies: Experience with big data platforms and tools such as Hadoop, Spark, and Apache Kafka.
25. Understanding of distributed computing and data processing frameworks.
Additional Information:
1. A minimum of 15-18 years of progressive information technology experience is required.
2. This position is based at the Bengaluru location.
3. 15 years of full-time education is required.
4. AWS Certified Data Engineer - Associate / Microsoft Certified: Azure Data Engineer Associate / Google Cloud Certified Professional Data Engineer certification is mandatory.
Posted 1 month ago
2.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Urgent Opening for AWS Developer - Bangalore Posted On 01st Feb 2020 12:16 PM Location Bangalore Role / Position AWS Developer Experience (required) 2-5 yrs Description Our client is a leading big data analytics company headquartered in Bangalore. Designation: AWS Developer. Location: Bangalore. Experience: 2-5 yrs, AWS certified, with solid 1+ years of experience in a production environment. Candidate Profile: An AWS-certified candidate with 1+ year of experience working in a production environment is the specific ask. Within AWS, the key needs are working knowledge of Kinesis (streaming data), Redshift/RDS (querying), and DynamoDB (NoSQL DB). Send Resumes to girish.expertiz@gmail.com
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Hyderabad
Work from Office
Urgent Requirement for Big Data. Notice Period: Immediate. Location: Hyderabad/Pune. Employment Type: C2H. Primary Skills: 6-8 yrs of experience working as a big data developer/supporting environments. Strong knowledge of Unix/big data scripting. Strong understanding of the big data (CDP/Hive) environment. Hands-on with GitHub and CI/CD implementations. Willingness to learn and to understand the reasoning behind every task. Ability to work independently on specialized assignments within the context of project deliverables. Take ownership of providing solutions and tools that iteratively increase engineering efficiencies. Excellent communication skills; team player. Good to have Hadoop and Control-M tooling knowledge. Good to have automation experience and knowledge of any monitoring tools. Role: You will work with the team handling applications developed using Hadoop/CDP and Hive. You will work within the Data Engineering team and with the Lead Hadoop Data Engineer and Product Owner. You are expected to support existing applications as well as design and build new data pipelines, and to support evergreening or upgrade activities of CDP/SAS/Hive. You are expected to participate in service management of the application, support issue resolution, and improve processing performance to prevent issues from recurring. Ensure the use of Hive, Unix scripting, and Control-M reduces lead time to delivery. Support for the application during the UK shift, as well as on-call support overnight and on weekends, is mandatory.
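As an illustration only (not from the posting), a minimal sketch of querying a Hive table from PySpark on a CDP-style cluster, the kind of support task this role describes. The database and table names are hypothetical, and access to the cluster's Hive metastore is assumed.

```python
# Minimal sketch: query a Hive table from PySpark on a Hadoop/CDP cluster.
# Database/table names are hypothetical; Hive metastore access is assumed.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-query-sketch")
         .enableHiveSupport()          # use the cluster's Hive metastore
         .getOrCreate())

# A typical maintenance-style check: row counts per partition for a daily feed.
df = spark.sql("""
    SELECT load_date, COUNT(*) AS row_count
    FROM example_db.daily_feed
    GROUP BY load_date
    ORDER BY load_date DESC
""")
df.show(10)
```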
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Mumbai
Work from Office
The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role requires identifying discrepancies and proposing optimal solutions using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors. Associate Process Manager - Roles and responsibilities: Utilize Adobe Analytics to collect, analyze, and interpret data related to website traffic, user behavior, and digital marketing campaigns. Develop and maintain custom reports, dashboards, and visualizations to communicate insights effectively to stakeholders. Collaborate with stakeholders to define key performance indicators (KPIs) and develop measurement frameworks to track and evaluate business performance. Identify opportunities for optimization and improvement across various aspects of the business, including website usability, customer journey, and marketing effectiveness. Conduct in-depth analysis using the Python programming language to uncover actionable insights and drive strategic decision-making. Stay abreast of industry trends, emerging technologies, and best practices in digital analytics, data science, and related fields. Manage project timelines, resources, and deliverables to ensure successful execution of analytics initiatives. Lead and mentor a team of analysts, providing guidance and support to drive professional growth and development. Technical and Functional Skills: Graduate with a minimum of 3 to 5 years of proven experience with data visualization tools such as Tableau, Power BI, or similar. Should have hands-on experience with Adobe Analytics, reporting, and Python. Ability to effectively manage multiple work assignments while being able to shift priorities. Domain knowledge of various industries such as banking, retail, e-commerce, etc. Excellent verbal and written communication skills. Strong analytical, quantitative, and problem-solving skills.
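For illustration only (not part of the posting), a minimal pandas sketch of the Python-side analysis this role describes: summarising a web-analytics export into a daily KPI. The file name and columns are hypothetical placeholders for whatever the reporting tool exports.

```python
# Minimal pandas sketch: compute a daily conversion-rate KPI from a
# web-analytics export. File name and columns are hypothetical.
import pandas as pd

visits = pd.read_csv("traffic_export.csv", parse_dates=["date"])

# Aggregate distinct visits and orders per calendar day.
daily = visits.groupby(visits["date"].dt.date).agg(
    visits=("visit_id", "nunique"),
    orders=("order_id", "nunique"),
)
daily["conversion_rate"] = daily["orders"] / daily["visits"]

# Surface the best-converting days for a stakeholder report.
print(daily.sort_values("conversion_rate", ascending=False).head())
```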
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to maintain a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here: to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives. Process Manager - Roles and responsibilities: Designing and implementing scalable, reliable, and maintainable data architectures on AWS. Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments. Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. Integrating AWS data solutions with existing systems and third-party services. Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval. Implementing data security and encryption best practices in AWS environments. Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed. Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions. Technical and Functional Skills: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift. Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Ability to analyze complex technical problems and propose effective solutions. Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.
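Purely as an illustration (not from the posting), a minimal Apache Airflow 2.x-style sketch of the orchestration pattern this role mentions: a daily DAG that extracts from S3 and loads into Redshift. The task bodies are stubs, and all names are hypothetical.

```python
# Minimal Airflow sketch: a daily S3 -> Redshift pipeline with two stub tasks.
# DAG id, task ids, and logic are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_s3(**context):
    print("extract: copy raw files from S3")        # placeholder for boto3 logic

def load_to_redshift(**context):
    print("load: COPY staged files into Redshift")  # placeholder for COPY command

with DAG(
    dag_id="example_s3_to_redshift",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",      # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_s3)
    load = PythonOperator(task_id="load", python_callable=load_to_redshift)
    extract >> load          # load runs only after extract succeeds
```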
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
Process Manager - AWS Data Engineer Mumbai/Pune | Full-time (FT) | Technology Services Shift Timings - EMEA (1pm-9pm) | Management Level - PM | Travel - NA The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role requires identifying discrepancies and proposing optimal solutions using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors. Process Manager - Roles and responsibilities: Understand clients' requirements and provide effective and efficient solutions in AWS using Snowflake. Assemble large, complex sets of data that meet non-functional and functional business requirements. Use Snowflake/Redshift architecture and design to create data pipelines and consolidate data in the data lake and data warehouse. Demonstrated strength and experience in data modeling, ETL development, and data warehousing concepts. Understanding of data pipelines and modern, cloud-based ways of automating them. Test and clearly document implementations so others can easily understand the requirements, implementation, and test conditions. Perform data quality testing and assurance as part of designing, building, and implementing scalable data solutions in SQL. Technical and Functional Skills: AWS Services: Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Programming Languages: Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Data Warehousing: Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift. ETL Tools: Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Database Management: Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Big Data Technologies: Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Version Control: Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Problem-solving Skills: Ability to analyze complex technical problems and propose effective solutions. Communication Skills: Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders. Education and Experience: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. About eClerx: eClerx is a global leader in productized services, bringing together people, technology, and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry.
Our vision is to be the innovation partner of choice for technology, data analytics, and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience. About eClerx Technology: eClerx's Technology Group collaboratively delivers Analytics, RPA, AI, and Machine Learning digital technologies that enable our consultants to help businesses thrive in a connected world. Our consultants and specialists partner with our global clients and colleagues to build and implement digital solutions through a broad spectrum of activities. To know more about us, visit https://eclerx.com. eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
Posted 1 month ago
5.0 - 10.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Immediate Openings for Data Scientist - India. Experience: 5+. Skill: Data Scientist. Location: India. Notice Period: Immediate. Employment Type: Contract. Job Description: Strong problem-solving skills with an emphasis on product development. Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets. Experience working with and creating data architectures. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks. Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications. Excellent written and verbal communication skills for coordinating across teams. A drive to learn and master new technologies and techniques. We're looking for someone with 5-7 years of experience manipulating data sets and building statistical models, who has a Master's in Statistics, Mathematics, Computer Science, or another quantitative field, and is familiar with the following software/tools: Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc. Knowledge and experience in statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, social network analysis, etc. Experience querying databases and using statistical computer languages: R, Python, SQL, etc. Experience using web services: Redshift, S3, Spark, etc. Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc. Experience analyzing data from 3rd-party providers: Google Analytics, Crimson Hexagon, Facebook Insights, etc. Experience visualizing/presenting data for stakeholders using different tools. Working knowledge of message queuing (Kafka or Google Pub/Sub), stream processing, and highly scalable data stores.
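Purely as an illustration (not from the posting), a minimal scikit-learn sketch of the modelling work described: fitting and evaluating a random forest on a synthetic dataset. Everything here is a toy stand-in for real project data.

```python
# Minimal scikit-learn sketch: train and score a random forest classifier
# on a synthetic dataset (toy stand-in for real data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data with 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Hold-out accuracy as a simple evaluation metric.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```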
Posted 1 month ago
5.0 - 10.0 years
9 - 13 Lacs
Hyderabad
Hybrid
Immediate Openings for Senior ServiceNow Consultant - Pan India - Contract. Experience: 5+ Years. Skill: Senior ServiceNow Consultant. Notice Period: Immediate. Employment Type: Contract. Work Mode: WFO/Hybrid. Job Description: Bachelor's degree in Computer Science, Information Technology, or a related field. 5+ years of hands-on experience with the ServiceNow platform, including architecture, design, development, and deployment across multiple modules. 3+ years of experience developing and managing integrations between ServiceNow and other systems and technologies. In-depth technical knowledge of integration protocols and technologies, including REST APIs, SOAP APIs, JDBC, LDAP, and others. Experience with other integration toolsets such as Workato, Apttus, or SnapLogic is a big plus. Proven experience leading successful ServiceNow implementations from start to finish.
Posted 1 month ago
4.0 - 9.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Responsibilities: Design, develop, and maintain data pipelines using Snowflake, DBT, and AWS. Collaborate with cross-functional teams to understand data requirements and deliver solutions. Optimize and troubleshoot existing data workflows to ensure efficiency and reliability. Implement best practices for data management and governance. Stay updated with the latest industry trends and technologies to continuously improve our data infrastructure.
Required Skills: Proficiency in Snowflake, DBT, and AWS. Experience with data modeling, ETL processes, and data warehousing. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities.
Preferred Skills: Knowledge of Fivetran (HVR) and Python. Familiarity with data integration tools and techniques. Ability to work in a fast-paced and agile environment.
Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
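For illustration only (not from the posting), a minimal sketch of connecting to Snowflake from Python using the snowflake-connector-python package and running a query. The account, credentials, and object names are hypothetical; in practice credentials would come from a secrets manager, not literals.

```python
# Minimal Snowflake connectivity sketch using snowflake-connector-python.
# Account, credentials, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",   # use a secrets manager in real code
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_DATE, COUNT(*) FROM example_table")
    print(cur.fetchone())          # simple sanity-check query
finally:
    conn.close()
```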
Posted 1 month ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderabad, Bengaluru
Hybrid
Immediate Openings for AWS Cloud Developer - Bengaluru - Contract. Skill: AWS Cloud Developer. Notice Period: Immediate. Employment Type: Contract. Job Description: Plan, design, and execute end-to-end data migration projects from various data sources to Amazon Aurora PostgreSQL and Amazon DynamoDB. Collaborate with cross-functional teams to understand data requirements, perform data analysis, and define migration strategies. Develop and implement data transformation and manipulation procedures using Talend, AWS Glue, and AWS DMS to ensure data accuracy and integrity during the migration process. Optimize data migration workflows for efficiency and reliability, and monitor performance to identify and address potential bottlenecks. Collaborate with database administrators, data engineers, and developers to troubleshoot and resolve data-related issues. Ensure adherence to best practices, security standards, and compliance requirements throughout the data migration process. Provide documentation, technical guidance, and training to team members and stakeholders on data migration procedures and best practices.
Posted 1 month ago
5.0 - 6.0 years
7 - 8 Lacs
Kolkata
Work from Office
Use Talend Open Studio to design, implement, and manage data integration solutions. Develop ETL processes to ensure data is accurately extracted, transformed, and loaded into various systems for analysis.
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Mumbai
Work from Office
Design and optimize ETL workflows using Talend. Ensure data integrity and process automation.
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai
Work from Office
Develop and maintain data-driven applications using Scala and PySpark. Work with large datasets, performing data analysis, building data pipelines, and optimizing performance.
Posted 1 month ago
4.0 - 5.0 years
6 - 7 Lacs
Bengaluru
Work from Office
Develop and manage data pipelines using Snowflake. Optimize performance and data warehousing strategies.
Posted 1 month ago
6.0 - 7.0 years
3 - 7 Lacs
Hyderabad
Work from Office
We are looking for a skilled AWS Data Engineer with 6 to 7 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in designing and implementing data pipelines on AWS. Roles and Responsibilities: Design, develop, and maintain large-scale data pipelines using AWS services such as S3, Lambda, Step Functions, etc. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and implement data quality checks and validation processes to ensure data integrity. Optimize data processing workflows for performance, scalability, and cost-effectiveness. Troubleshoot and resolve complex technical issues related to data engineering projects. Ensure compliance with industry standards and best practices for data security and privacy. Job Requirements: Strong understanding of the AWS ecosystem, including S3, Lambda, Step Functions, Redshift, Glue, Athena, etc. Experience with data modeling, data warehousing, and ETL processes. Proficiency in programming languages such as Python, Java, or Scala. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a fast-paced environment. Strong communication and interpersonal skills.
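Purely as an illustration (not from the posting), a minimal AWS Lambda sketch of the S3-triggered pipeline step this role describes: log each newly arrived object and tag it as received. The bucket and key come from the S3 event payload; the tagging step is a hypothetical placeholder for real processing.

```python
# Minimal AWS Lambda sketch for an S3-triggered pipeline step.
# The tag written here is a hypothetical placeholder for real processing.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"received s3://{bucket}/{key}")   # goes to CloudWatch Logs
        s3.put_object_tagging(
            Bucket=bucket,
            Key=key,
            Tagging={"TagSet": [{"Key": "status", "Value": "received"}]},
        )
    return {"processed": len(records)}
```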
Posted 1 month ago
5.0 - 10.0 years
8 - 14 Lacs
Hyderabad
Work from Office
#Employment Type: Contract 1. 5+ years in ETL domain development (of which 3+ years in Talend) 2. Strong in writing SQL queries (mandate) 3. Hands-on troubleshooting of SQL queries (mandate) 4. Hands-on Talend deployment and development (mandate) 5. Strong in DWH concepts (mandate)
Posted 1 month ago
6.0 - 11.0 years
10 - 20 Lacs
Hyderabad
Work from Office
#Employment Type: Contract SQL DBA with Azure PaaS. Hands-on experience designing, implementing, and maintaining on-premises and Azure PaaS SQL databases required. Experience managing Postgres, CosmosDB, and Redshift preferred. Strong scripting and automation skills required. Experience auditing environments and making/implementing recommendations. Experience supporting a 24/7/365 production environment, including on-call support. Ability to work independently without a high level of Subway intervention or management.
Posted 1 month ago