You will be responsible for building unique user experiences across solutions and platforms that leverage state-of-the-art AI/ML models. Your role will involve designing and implementing the architecture for complex single-page applications (SPAs) to ensure high performance and responsiveness, and developing reusable components and libraries for future use. Collaborating with UX/UI designers, you will translate designs and wireframes into high-quality code. You will implement and advocate for best practices in frontend development, including code standards, testing, and documentation; develop and maintain automated tests to ensure code quality and reliability; and apply security best practices while ensuring compliance with data protection regulations.

Key Requirements:
- Experience building performant applications in React, Angular, or Vue.js.
- In-depth understanding of HTML5 and CSS3.
- Strong understanding of Node.js.
- Experience with libraries such as Bootstrap and Tailwind.
- Experience building applications with server-side rendering frameworks such as Next.js.
- Experience deploying applications to the cloud and using Content Delivery Networks (CDNs).
- Experience setting up Continuous Integration/Continuous Deployment (CI/CD) for frontend applications.
- Experience with frameworks such as React Native for mobile app development.
- Strong understanding of Sales, Order Management, Finance, and Customer Service processes.
- Experience with Enterprise Resource Planning (ERP) systems and Customer Relationship Management (CRM) platforms, particularly those that support Sales and Customer Service functions.

Please note that the above information is referenced from hirist.tech.
As a Data Engineer, you will be responsible for building highly scalable, fault-tolerant distributed data processing systems that handle hundreds of terabytes of data daily and manage petabyte-sized data warehouses and Elasticsearch clusters. Your role will involve developing quality data solutions and simplifying existing datasets into self-service models. You will create data pipelines that enhance data quality and are resilient to unreliable data sources; take ownership of data mapping, business logic, transformations, and data quality; and engage in low-level systems debugging and performance optimization on large production clusters. Your responsibilities will also include participating in architecture discussions, contributing to the product roadmap, and leading new projects, as well as maintaining and supporting existing platforms and transitioning them to newer technology stacks.

To excel in this role, you must demonstrate proficiency in Python and PySpark, along with a deep understanding of Apache Spark, Spark tuning, RDD creation, and DataFrame construction. Experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto is essential, as is expertise in building distributed environments using tools like Kafka, Spark, Hive, and Hadoop. A solid grasp of distributed database systems' architecture and functioning, as well as experience with file formats like Parquet and Avro for handling large data volumes, will be beneficial. Familiarity with one or more NoSQL databases and cloud platforms such as AWS and GCP is preferred.

The ideal candidate will have at least 5 years of professional experience as a data or software engineer. This position offers a challenging opportunity to work on cutting-edge technologies and contribute to the development of robust data processing systems.
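The pipeline work described above (enhancing data quality while tolerating unreliable sources) can be sketched in plain Python. This is a minimal illustration only: the schema, field names, and validation rules are hypothetical, and in a real deployment this logic would typically run as a PySpark transformation over distributed data rather than an in-memory loop.

```python
from datetime import datetime

def clean_records(raw_records):
    """Split raw records into valid rows and quarantined bad rows,
    so one malformed upstream record never fails the whole batch."""
    valid, quarantined = [], []
    for rec in raw_records:
        try:
            # Hypothetical schema: every record needs an id, a numeric
            # amount, and an ISO-8601 event timestamp.
            cleaned = {
                "id": str(rec["id"]),
                "amount": float(rec["amount"]),
                "event_time": datetime.fromisoformat(rec["event_time"]),
            }
            valid.append(cleaned)
        except (KeyError, TypeError, ValueError):
            # Unreliable source: quarantine the record for later
            # inspection instead of crashing the pipeline.
            quarantined.append(rec)
    return valid, quarantined

raw = [
    {"id": 1, "amount": "12.50", "event_time": "2024-01-15T09:30:00"},
    {"id": 2, "amount": "not-a-number", "event_time": "2024-01-15T09:31:00"},
    {"amount": "3.00", "event_time": "2024-01-15T09:32:00"},  # missing id
]
valid, bad = clean_records(raw)
print(len(valid), len(bad))  # → 1 2
```

The design choice worth noting is the quarantine path: resilient pipelines route malformed data aside with enough context to debug it, rather than either silently dropping it or letting a single bad record abort the job.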
If you are passionate about data engineering and possess the required skills and experience, we encourage you to apply and join our dynamic team.